15 research outputs found

    Reference and the facilitation of search in spatial domains

    This is a pre-final version of the article, whose official publication is expected in the winter of 2013-14. Peer reviewed. Preprint.

    Generating Easy References: the case of document deixis

    This work is part of an ongoing PhD project of the first author and has been supported by CNPq, the Brazilian Research Council. Publisher PDF.

    Using 'Low-cost' Learning Features for Pronoun Resolution

    PACLIC / The University of the Philippines Visayas Cebu College, Cebu City, Philippines / November 20-22, 200

    Um Sistema de Realização Superficial para Geração de Textos em Português

    Natural language generation (NLG) systems, which produce text from non-linguistic data, have a wide range of applications in the textual presentation of complex and/or large-volume content. This work focuses on the implementation of a rule-based surface realization module for Brazilian Portuguese, called PortNLG, which handles the task of sentence linearization for computational applications that need to present output data in textual form. PortNLG is provided as a Java library, and its results outperform those of n-gram models on the task of generating newspaper headlines.

    Myers-Briggs personality classification from social media text using pre-trained language models

    In Natural Language Processing, the use of pre-trained language models has been shown to obtain state-of-the-art results in many downstream tasks such as sentiment analysis, author identification and others. In this work, we address the use of these methods for personality classification from text. Focusing on the Myers-Briggs (MBTI) personality model, we describe a series of experiments in which the well-known Bidirectional Encoder Representations from Transformers (BERT) model is fine-tuned to perform MBTI classification. Our main findings suggest that the current approach significantly outperforms well-known text classification models based on bag-of-words and static word embeddings alike across multiple evaluation scenarios, and generally outperforms previous work in the field.